Normalized Maximal Margin Loss for Open-Set Image Classification
Authors
Abstract
Similar Resources
Maximal Margin Classification for Metric Spaces
In this article we construct a maximal margin classification algorithm for arbitrary metric spaces. First, we show that the Support Vector Machine (SVM) is a maximal margin algorithm for the class of metric spaces where the negative squared distance is conditionally positive definite (CPD). This means that the metric space can be isometrically embedded into a Hilbert space, where one performs...
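For reference, the CPD condition in this abstract can be stated explicitly. The negative squared distance is conditionally positive definite when the quadratic form below is nonnegative on zero-sum coefficient vectors; by Schoenberg's classical theorem, this is equivalent to (X, d) embedding isometrically into a Hilbert space:

```latex
% Conditional positive definiteness of -d^2 on a metric space (X, d):
\[
\sum_{i=1}^{n} \sum_{j=1}^{n} c_i\, c_j \left( -\, d(x_i, x_j)^2 \right) \;\ge\; 0
\quad \text{for all } x_1, \dots, x_n \in X
\text{ and all } c \in \mathbb{R}^n \text{ with } \sum_{i=1}^{n} c_i = 0 .
\]
```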
A New Approximate Maximal Margin Classification Algorithm
A new incremental learning algorithm is described which approximates the maximal margin hyperplane w.r.t. norm p ≥ 2 for a set of linearly separable data. Our algorithm, called ALMA_p (Approximate Large Margin Algorithm w.r.t. norm p), takes O((p−1)/(α²γ²)) corrections to separate the data with p-norm margin larger than (1−α)γ, where γ is the (normalized) p-norm margin of the data. ALMA_p av...
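To make the update rule concrete, here is a minimal sketch of an ALMA_2-style training loop (the p = 2 case) in Python. The function name alma2_train, the epoch-based presentation, and the constants B and C are illustrative assumptions following one common parameterization, not a verified transcription of the paper's algorithm:

```python
import numpy as np

def alma2_train(X, y, alpha=0.5, epochs=10):
    """ALMA_2-style approximate large-margin training (illustrative sketch).

    Examples are normalized to unit length, the weight vector is kept in
    the unit ball, and an update fires whenever the scaled margin on an
    example falls below a threshold that shrinks with the correction
    count k. Constants B and C are assumed defaults, not verified.
    """
    n, d = X.shape
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm inputs
    w = np.zeros(d)
    k = 1                        # correction counter
    B = np.sqrt(8.0) / alpha     # assumed margin-scale constant
    C = np.sqrt(2.0)             # assumed step-size constant
    for _ in range(epochs):
        for i in range(n):
            gamma_k = B / np.sqrt(k)          # current margin target
            if y[i] * w.dot(X[i]) <= (1 - alpha) * gamma_k:
                eta_k = C / np.sqrt(k)        # decaying step size
                w = w + eta_k * y[i] * X[i]
                w = w / max(1.0, np.linalg.norm(w))  # project into unit ball
                k += 1
    return w
```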
A Maximal Margin Classification Algorithm Based on Data Field
This paper puts forward a new maximal margin classification algorithm based on a general data field (MMGDF). This method transforms the linearly inseparable problem into finding the nearest points in the general data field (GDF). GDF is inspired by physical fields. Different dimensions represent different properties. Not all attributes play a decisive role in the classification process. Ther...
Open-Set Classification for Automated Genre Identification
Automated Genre Identification (AGI) of web pages is a problem of increasing importance, since web genre information (e.g., blogs, news, e-shops) can enhance modern Information Retrieval (IR) systems. The state of the art in this field treats AGI as a closed-set classification problem, where a variety of web page representations and machine learning models have been intensively studied. In this p...
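At decision time, the open-set setting reduces to adding a reject option on top of a closed-set classifier. The sketch below shows a generic threshold-based rule, not the specific model studied in that paper; the name open_set_predict and the threshold parameter are hypothetical:

```python
import numpy as np

def open_set_predict(scores, threshold, unknown_label=-1):
    """Generic open-set decision rule (illustrative sketch).

    scores: (n_samples, n_known_classes) array of class scores.
    Accept the top-scoring known class only when its score clears the
    rejection threshold; otherwise flag the input as an unseen class.
    """
    best = scores.argmax(axis=1)   # most likely known class per sample
    conf = scores.max(axis=1)      # its score
    return np.where(conf >= threshold, best, unknown_label)
```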
Spectrally-normalized Margin Bounds for Neural Networks
We present a generalization bound for feedforward neural networks with ReLU activations in terms of the product of the spectral norm of the layers and the Frobenius norm of the weights. The key ingredient is a bound on the changes in the output of a network with respect to perturbations of its weights, thereby bounding the sharpness of the network. We combine this perturbation bound with the PAC...
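To make the complexity term concrete, the following sketch computes a spectrally-normalized quantity of the kind such bounds scale with: the product of per-layer spectral norms (a Lipschitz upper bound for the network) times a Frobenius/spectral ratio correction summed over layers. The exact constants and correction term differ between variants of these bounds, so spectral_complexity is an illustrative helper, not the paper's precise bound:

```python
import numpy as np

def spectral_complexity(weight_matrices):
    """Illustrative spectrally-normalized complexity term.

    For weight matrices W_1, ..., W_L, computes
        prod_i ||W_i||_2  *  sqrt( sum_i ||W_i||_F^2 / ||W_i||_2^2 ),
    i.e. the Lipschitz product with a Frobenius-norm correction.
    """
    spec = [np.linalg.norm(W, ord=2) for W in weight_matrices]     # largest singular values
    frob = [np.linalg.norm(W, ord="fro") for W in weight_matrices]
    lipschitz = np.prod(spec)                                      # product of spectral norms
    correction = sum((f / s) ** 2 for f, s in zip(frob, spec))     # per-layer ratios
    return lipschitz * np.sqrt(correction)
```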
Journal
Journal title: IEEE Access
Year: 2021
ISSN: 2169-3536
DOI: 10.1109/access.2021.3068042